Parameter Learning of Bayesian Network with Multiplicative Synergistic Constraints

Authors

Abstract

Learning the conditional probability table (CPT) parameters of Bayesian networks (BNs) is a key challenge in real-world decision support applications, especially when there are limited data available. The traditional approach to this challenge is to introduce domain knowledge/expert judgments encoded as qualitative parameter constraints. In this paper, we focus on multiplicative synergistic constraints; the negative and positive multiplicative synergy constraints considered in this paper are symmetric. In order to integrate these constraints into the learning process of Bayesian network parameters, we propose four methods to deal with the constraints, based on the idea of the classical isotonic regression algorithm. The proposed methods are simulated using the lawn moist model and the Asia network, and compared with maximum likelihood estimation (MLE). Simulation results show that the proposed methods are superior to the MLE algorithm in the accuracy of parameter learning: they obtain more accurate estimators of the parameters and reduce the dependence on expert experience. Combining these constraints with MLE improves parameter learning under small-sample conditions.
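The abstract does not detail the paper's four constraint-handling methods, but their shared ingredient, the classical isotonic regression idea, can be illustrated with the pool-adjacent-violators algorithm (PAVA): raw MLE estimates of a CPT column are projected onto an ordering constraint, weighted by sample counts. The function name, the example constraint, and the numbers below are hypothetical, not taken from the paper — a minimal sketch, not the authors' method.

```python
def pava(y, w=None):
    """Pool Adjacent Violators: weighted least-squares projection of the
    sequence y onto the set of non-decreasing sequences."""
    n = len(y)
    w = [1.0] * n if w is None else list(w)
    vals, wts, sizes = [], [], []  # block means, block weights, block sizes
    for i in range(n):
        vals.append(float(y[i])); wts.append(float(w[i])); sizes.append(1)
        # Merge trailing blocks while they violate monotonicity.
        while len(vals) > 1 and vals[-2] > vals[-1]:
            tw = wts[-2] + wts[-1]
            mv = (vals[-2] * wts[-2] + vals[-1] * wts[-1]) / tw
            vals[-2:], wts[-2:], sizes[-2:] = [mv], [tw], [sizes[-2] + sizes[-1]]
    out = []
    for v, s in zip(vals, sizes):
        out.extend([v] * s)
    return out

# Hypothetical example: raw MLE estimates of P(X = 1 | parent state k) from
# sparse counts, under an assumed constraint that the probability is
# non-decreasing in k. Weights are the per-state sample counts, so
# better-supported estimates move less.
counts = [20, 5, 8]            # samples observed per parent state
mle = [0.50, 0.30, 0.80]       # raw MLE estimates (violate the ordering)
print(pava(mle, counts))       # first two estimates pooled to their weighted mean
```

The projection only pools the adjacent estimates that violate the constraint, leaving well-ordered estimates untouched, which is why constraint-aware estimators of this kind help most when sample counts are small.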


Related Articles

Bayesian Network Learning with Parameter Constraints

The task of learning models for many real-world problems requires incorporating domain knowledge into learning algorithms, to enable accurate learning from a realistic volume of training data. This paper considers a variety of types of domain knowledge for constraining parameter estimates when learning Bayesian Networks. In particular, we consider domain knowledge that constrains the values or ...


Parameter Learning of Bayesian Network Classifiers Under Computational Constraints

We consider online learning of Bayesian network classifiers (BNCs) with reduced-precision parameters, i.e. the conditional-probability tables parameterizing the BNCs are represented by low bit-width fixed-point numbers. In contrast to previous work, we analyze the learning of these parameters using reduced-precision arithmetic only, which is important for computationally constrained platforms, e....


Bayesian Network Parameter Learning using EM with Parameter Sharing

This paper explores the effects of parameter sharing on Bayesian network (BN) parameter learning when there is incomplete data. Using the Expectation Maximization (EM) algorithm, we investigate how varying degrees of parameter sharing, varying number of hidden nodes, and different dataset sizes impact EM performance. The specific metrics of EM performance examined are: likelihood, error, and the ...


Parameter learning but not structure learning: a Bayesian network model of constraints on early perceptual learning.

Visual scientists have shown that people are capable of perceptual learning in a large variety of circumstances. Are there constraints on such learning? We propose a new constraint on early perceptual learning, namely, that people are capable of parameter learning: they can modify their knowledge of the prior probabilities of scene variables or of the statistical relationships among scene and pe...


An empirical study of Bayesian network parameter learning with monotonic influence constraints

Learning the conditional probability table (CPT) parameters of Bayesian networks (BNs) is a key challenge in real-world decision support applications, especially when there are limited data available. A conventional way to address this challenge is to introduce domain knowledge/expert judgments that are encoded as qualitative parameter constraints. In this paper we focus on a class of constrain...



Journal

Journal title: Symmetry

Year: 2022

ISSN: 0865-4824, 2226-1877

DOI: https://doi.org/10.3390/sym14071469